
Learning Without Forgetting Simplified

#artificialintelligence

Deep learning has recently become a dominant approach in computer vision thanks to convolutional neural networks (CNNs). A CNN has to be trained well before it can be deployed in real-world applications, yet unfortunately, sufficient training data is not always available. Transfer learning was invented to address this: it takes advantage of the knowledge of a model pre-trained on a sufficiently large database to solve other, related problems. However, transfer learning commonly does not consider the model's performance on its previous tasks; in other words, a CNN may forget what it had learned before once its knowledge is transferred to another task. For instance, suppose a CNN classifier pre-trained to classify vehicle types is adapted via transfer learning to classify car genres: the resulting model works well on recognizing car genres, yet it underperforms on vehicle-type classification compared with what it used to do. That example shows the biggest shortcoming of transfer learning.
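The "Learning Without Forgetting" remedy the post alludes to can be sketched as a combined loss: alongside the cross-entropy on the new task, the old-task head is penalized for drifting away from the outputs recorded before fine-tuning began (knowledge distillation with a temperature). A minimal NumPy sketch; the names `lwf_loss`, `T`, and `lam` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax, row-wise."""
    z = z / T
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def lwf_loss(new_logits, new_labels, old_logits, old_logits_recorded,
             T=2.0, lam=1.0):
    """New-task cross-entropy plus a distillation term that keeps the
    old-task outputs close to those recorded before fine-tuning."""
    n = len(new_labels)
    p_new = softmax(new_logits)
    ce = -np.mean(np.log(p_new[np.arange(n), new_labels] + 1e-12))
    q = softmax(old_logits_recorded, T)   # frozen soft targets
    p = softmax(old_logits, T)            # current old-task outputs
    distill = -np.mean(np.sum(q * np.log(p + 1e-12), axis=1))
    return ce + lam * distill
```

Minimizing this loss lets the network learn the new task while `lam` controls how strongly it is held to its old behaviour, which is precisely the failure mode plain transfer learning ignores.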


On Better Exploring and Exploiting Task Relationships in Multi-Task Learning: Joint Model and Feature Learning

Li, Ya, Tian, Xinmei, Liu, Tongliang, Tao, Dacheng

arXiv.org Artificial Intelligence

Multitask learning (MTL) aims to learn multiple tasks simultaneously by exploiting the interdependence between them. How to measure the relatedness between tasks is a long-standing question, and there are mainly two approaches: sharing common parameters or sharing common features across tasks. However, these two types of relatedness are usually learned independently, leading to a loss of information. In this paper, we propose a new strategy that measures relatedness by jointly learning shared parameters and shared feature representations. The objective of the proposed method is to transform the features from different tasks into a common feature space in which the tasks are closely related and the shared parameters can be better optimized. We give a detailed introduction to our multitask learning method, and introduce an alternating algorithm to optimize the non-convex objective. A theoretical bound demonstrates that the relatedness between tasks can be better measured by the proposed algorithm. We conduct various experiments to verify the superiority of the proposed joint model and feature multitask learning method.
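The alternating scheme the abstract mentions can be illustrated on a toy linear model y_t ≈ X_t U w_t, where U is a feature transform shared by all tasks and w_t is a per-task parameter vector. The NumPy sketch below alternates between solving for the task weights and for the shared transform; the symbols, dimensions, and ridge penalty are assumptions for illustration, not the authors' actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n, n_tasks = 8, 3, 50, 4
lam = 1e-3

# Synthetic tasks that genuinely share a feature transform U_true.
U_true = rng.normal(size=(d, k))
Xs = [rng.normal(size=(n, d)) for _ in range(n_tasks)]
ws_true = [rng.normal(size=k) for _ in range(n_tasks)]
ys = [X @ U_true @ w for X, w in zip(Xs, ws_true)]

def mse(U, W):
    return sum(np.mean((X @ U @ w - y) ** 2)
               for X, w, y in zip(Xs, W, ys)) / n_tasks

U = rng.normal(size=(d, k))
W = [np.zeros(k) for _ in range(n_tasks)]
err_start = mse(U, W)
for _ in range(50):
    # Step 1: with U fixed, each task reduces to a small ridge problem in w_t.
    W = [np.linalg.solve((X @ U).T @ (X @ U) + lam * np.eye(k), (X @ U).T @ y)
         for X, y in zip(Xs, ys)]
    # Step 2: with all w_t fixed, solve for the shared U jointly, using
    # the identity X U w = kron(X, w^T) vec(U) (row-major vec).
    A = np.vstack([np.kron(X, w[None, :]) for X, w in zip(Xs, W)])
    b = np.concatenate(ys)
    u = np.linalg.solve(A.T @ A + lam * np.eye(d * k), A.T @ b)
    U = u.reshape(d, k)
err_end = mse(U, W)
```

Each half-step minimizes the regularized objective exactly in one block of variables, so the fit improves monotonically even though the joint problem in (U, w_1, …, w_T) is non-convex.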


Learning Implicit Tasks for Patient-Specific Risk Modeling in ICU

Nori, Nozomi (Kyoto University) | Kashima, Hisashi (Kyoto University) | Yamashita, Kazuto (Kyoto University) | Kunisawa, Susumu (Kyoto University) | Imanaka, Yuichi (Kyoto University)

AAAI Conferences

Accurate assessment of the severity of a patient’s condition plays a fundamental role in acute hospital care such as that provided in an intensive care unit (ICU). ICU clinicians are required to make sense of a large amount of clinical data in a limited time to estimate the severity of a patient’s condition, which ultimately leads to the planning of appropriate care. The ICU is an especially demanding environment for clinicians because of the diversity of patients who mostly suffer from multiple diseases of various types. In this paper, we propose a mortality risk prediction method for ICU patients. The method is intended to enhance the severity assessment by considering the diversity of patients. Our method produces patient-specific risk models that reflect the collection of diseases associated with the patient. Specifically, we assume a small number of latent basis tasks, where each latent task is associated with its own parameter vector; a parameter vector for a specific patient is constructed as a linear combination of these. The latent representation of a patient, namely, the coefficients of the combination, is learned based on the collection of diseases associated with the patient. Our method could be considered a multi-task learning method where latent tasks are learned based on the collection of diseases. We demonstrate the effectiveness of our proposed method using a dataset collected from a hospital. Our method achieved higher predictive performance compared with a single-task learning method, the “de facto standard,” and several multi-task learning methods including a recently proposed method for ICU mortality risk prediction. Furthermore, our proposed method could be used not only for predictions but also for uncovering patient-specificity from different viewpoints.
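The latent-basis construction described above, in which a patient's parameter vector is a linear combination of a few latent task vectors with coefficients derived from the patient's diseases, can be sketched as follows. This is a simplified illustration with assumed shapes, synthetic data, and plain gradient descent on a logistic loss; it is not the authors' method or data:

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, D, n = 6, 2, 5, 200   # features, latent tasks, disease codes, patients

X = rng.normal(size=(n, d))                        # clinical measurements
C = rng.integers(0, 2, size=(n, D)).astype(float)  # disease indicators

# Ground truth used only to simulate mortality labels.
Theta_true = rng.normal(size=(d, K))
B_true = rng.normal(size=(K, D))
logits_true = np.einsum('nd,dk,kD,nD->n', X, Theta_true, B_true, C)
y = (rng.random(n) < 1 / (1 + np.exp(-logits_true))).astype(float)

def loss(Theta, B):
    p = 1 / (1 + np.exp(-np.sum(X * ((C @ B.T) @ Theta.T), axis=1)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

Theta = rng.normal(size=(d, K)) * 0.1  # latent task parameter vectors
B = rng.normal(size=(K, D)) * 0.1      # maps diseases -> latent coefficients
loss_start = loss(Theta, B)
lr = 0.05
for _ in range(300):
    S = C @ B.T                    # (n, K) latent coefficients per patient
    W = S @ Theta.T                # (n, d) patient-specific weight vectors
    p = 1 / (1 + np.exp(-np.sum(X * W, axis=1)))
    g = (p - y)[:, None]           # logistic-loss gradient wrt the logits
    gTheta = (g * X).T @ S / n     # chain rule: logit = x^T Theta (B c)
    gB = Theta.T @ ((g * X).T @ C) / n
    Theta -= lr * gTheta
    B -= lr * gB
loss_end = loss(Theta, B)
```

The key point the sketch captures is that every patient gets their own weight vector `W[i] = Theta @ (B @ C[i])`, yet only the small matrices `Theta` and `B` are learned, shared across all patients.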


Multi-Task Model and Feature Joint Learning

Li, Ya (University of Science and Technology of China) | Tian, Xinmei (University of Science and Technology of China) | Liu, Tongliang (University of Technology, Sydney) | Tao, Dacheng (University of Technology, Sydney)

AAAI Conferences

Given several tasks, multi-task learning (MTL) learns multiple tasks jointly by exploiting the interdependence between them. The basic assumption in MTL is that those tasks are indeed related. Existing MTL methods model the task relatedness/interdependence in two different ways: either common parameter-sharing or common feature-sharing across tasks. In this paper, we propose a novel multi-task learning method that jointly learns shared parameters and a shared feature representation. Our objective is to learn a set of common features with which the tasks are related as closely as possible, so that the common parameters shared across tasks can be optimally learned. We present a detailed derivation of our multi-task learning method and propose an alternating algorithm to solve the resulting non-convex optimization problem. We further present a theoretical bound which directly demonstrates that the proposed method can successfully model the relatedness via joint common-parameter and common-feature learning. Extensive experiments conducted on several real-world multi-task learning datasets demonstrate the effectiveness of our multi-task model and feature joint learning method.